probabilistic information
Probabilities-Informed Machine Learning
As a natural evolution of traditional regression methods [3], ML models such as Support Vector Regression (SVR) [4] and Artificial Neural Networks (ANN) [5] have been developed to handle non-linear relationships and high-dimensional datasets [6] with increasing accuracy and robustness. For instance, SVR has proven to be a robust regression tool because it can generalize well with limited data and capture nonlinear relationships using kernel functions [7]. Similarly, ANN, inspired by the neural architecture of the human brain, has become foundational to ML [5]. Typically, these methods use inputs (X) and outputs (Y) to construct surrogate models that aim to minimize the difference between the predicted and actual output values. These models have found applications across diverse fields, including engineering, medicine, and economics, demonstrating their versatility and potential [8], [9], [10]. In many real-world applications, additional prior information regarding the output model can be leveraged to enhance its accuracy and robustness [11], [12]. For instance, in physical systems, knowledge of the governing laws of physics has been successfully incorporated into ML by developing physics-informed neural networks (PINNs) [13], leading to improved efficiency and accuracy in prediction tasks [14]. In addition to physical laws, probabilistic information about the structure of the problem may also exist in practical scenarios [15]. Moreover, in many systems, the output variable is inherently probabilistic, necessitating models that approximate the probabilistic structure of the output [16].
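The kernel idea behind SVR can be illustrated with a minimal, dependency-light sketch. The snippet below uses kernel ridge regression, a close relative of SVR that relies on the same RBF kernel trick for nonlinear fitting; the toy data and the `gamma` and `lam` values are illustrative assumptions, not settings from the cited works.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix of the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X, y, gamma=1.0, lam=1e-2):
    # dual weights: alpha = (K + lam * I)^{-1} y
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    # prediction is a kernel-weighted combination of training targets
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# toy nonlinear data: noisy sine, the kind of relationship a linear
# regression cannot capture but a kernel method can
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

alpha = fit_kernel_ridge(X, y, gamma=0.5, lam=1e-2)
y_hat = predict(X, alpha, X, gamma=0.5)
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
```

As in SVR, the kernel replaces an explicit nonlinear feature map, so the model stays linear in its dual weights while fitting a nonlinear input-output relationship.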
Probabilistic Computation in Spiking Populations
As animals interact with their environments, they must constantly update estimates about their states. Bayesian models combine prior probabilities, a dynamical model and sensory evidence to update estimates optimally. These models are consistent with the results of many diverse psychophysical studies. However, little is known about the neural representation and manipulation of such Bayesian information, particularly in populations of spiking neurons. We consider this issue, suggesting a model based on standard neural architecture and activations.
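The prior-dynamics-evidence cycle described above can be sketched as a discrete-state Bayesian filter: propagate the belief through the dynamical model, then reweight by the sensory likelihood. The two-state transition matrix and likelihood values below are illustrative assumptions, not the spiking-population model itself.

```python
import numpy as np

def bayes_filter_step(belief, transition, likelihood):
    # predict: propagate the current belief through the dynamical model
    predicted = transition.T @ belief
    # update: weight each state by the sensory evidence and renormalize
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# two hidden states with "sticky" dynamics (states tend to persist)
T = np.array([[0.9, 0.1],
              [0.1, 0.9]])
belief = np.array([0.5, 0.5])   # uninformative prior
lik = np.array([0.2, 0.8])      # observation favouring state 1

belief = bayes_filter_step(belief, T, lik)
```

Iterating this step over an observation stream yields the optimal running estimate that the psychophysical studies cited above are compared against.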
Unforeseen Evidence
In this note, I propose a normative updating rule, extended Bayesianism, for the incorporation of probabilistic information arising from the process of becoming more aware. Extended Bayesianism generalizes standard Bayesian updating to allow the posterior to reside on a richer probability space than the prior. I then provide an observable criterion on prior and posterior beliefs such that they are consistent with extended Bayesianism. Key words: extended Bayesianism; reverse Bayesianism; conditional expectations.

Conditioning on Unforeseen Evidence

Decision makers (DMs) who are unaware cannot conceive of, nor articulate, the decision-relevant contingencies they are unaware of.
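One hedged illustration of a posterior living on a richer space than the prior, in the reverse-Bayesian spirit: newly conceived contingencies receive fresh probability mass, while previously conceived contingencies keep their relative odds. The helper name and the numbers are hypothetical and do not reproduce the paper's formal rule.

```python
def extend_and_update(prior, new_mass):
    """Extend a belief to newly conceived contingencies.

    prior:    dict mapping known contingencies to probabilities (sums to 1)
    new_mass: dict mapping newly conceived contingencies to the
              probability mass they receive
    Old contingencies are rescaled so their relative odds are preserved.
    """
    scale = 1.0 - sum(new_mass.values())
    posterior = {k: v * scale for k, v in prior.items()}
    posterior.update(new_mass)
    return posterior

prior = {"A": 0.75, "B": 0.25}          # DM is aware of A and B only
post = extend_and_update(prior, {"C": 0.2})  # DM becomes aware of C
# odds A:B remain 3:1 after the space is enriched
```

The posterior is defined on {A, B, C} while the prior was defined on {A, B}, which is exactly the kind of space enrichment standard Bayesian conditioning cannot express.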
Automated diagnosis is an important AI problem not only for its potential practical applications but also because it exposes issues common to all automated reasoning efforts and presents real challenges to existing paradigms. Current research in this area addresses many problems, including managing and structuring probabilistic information, modeling physical systems, reasoning with defeasible assumptions, and interleaving deliberation and action. Furthermore, diagnosis programs must face these problems in contexts where scaling up to deal with cases of realistic size results in daunting combinatorics. This article presents these and other issues as discussed at the First International Workshop on Principles of Diagnosis. Diagnosis has historically provided an obliging rock for each succeeding generation of AI researchers to blunt their axes on.
Matthew L. Ginsberg
Arguments are presented in favor of the answer "yes". The intuitive appeal (or lack thereof) of probabilities is considered briefly. The theoretical adequacies of probabilistic methods are investigated by considering them in light of McCarthy's "typology of uses of non-monotonic reasoning." A quantitative approach which overcomes the usual need for a priori probabilities is presented. Some of the practical advantages of using probabilities in a production system are described.
Heuristic Programming Project 1982 Report No. HPP 82-38
The Computer and Medical Decision Making: Good Advice Is Not Enough. Reprinted, with permission, from Engineering in Medicine and Biology Magazine 1, 1992. Mailing address: Medical Computer Science, Room TC-117, Division of General Internal Medicine, Stanford University School of Medicine, Stanford, California 94305. Dr. Shortliffe is a recipient of Research Career Development Award LM-00048 from the National Library of Medicine. Much of the training of physicians is designed to facilitate optimal, informed clinical decision making.
Distributionally Robust Markov Decision Processes
We consider Markov decision processes where the values of the parameters are uncertain. This uncertainty is described by a sequence of nested sets (that is, each set contains the previous one), each of which corresponds to a probabilistic guarantee for a different confidence level so that a set of admissible probability distributions of the unknown parameters is specified. This formulation models the case where the decision maker is aware of and wants to exploit some (yet imprecise) a-priori information of the distribution of parameters, and arises naturally in practice where methods to estimate the confidence region of parameters abound. We propose a decision criterion based on *distributional robustness*: the optimal policy maximizes the expected total reward under the most adversarial probability distribution over realizations of the uncertain parameters that is admissible (i.e., it agrees with the a-priori information). We show that finding the optimal distributionally robust policy can be reduced to a standard robust MDP where the parameters belong to a single uncertainty set, hence it can be computed in polynomial time under mild technical conditions.
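The reduction mentioned at the end, to a robust MDP over a single uncertainty set, can be sketched with robust value iteration: at each state-action pair, nature picks the most adversarial admissible transition distribution before the agent maximizes. The toy two-state MDP below, with a finite admissible set per state-action pair, is an illustrative assumption rather than an example from the paper.

```python
import numpy as np

def robust_value_iteration(P_sets, R, gamma=0.9, iters=200):
    # P_sets[s][a]: list of admissible next-state distributions
    # (the single uncertainty set of the reduced robust MDP)
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        V_new = np.empty(n_states)
        for s in range(n_states):
            q_values = []
            for a in range(n_actions):
                # nature minimizes expected continuation value
                worst = min(p @ V for p in P_sets[s][a])
                q_values.append(R[s, a] + gamma * worst)
            V_new[s] = max(q_values)  # agent maximizes over actions
        V = V_new
    return V

# toy example: from state 0 the transition is ambiguous (stay, or fall
# into the zero-reward absorbing state 1); robustness plans for the worst
P_sets = [
    [[np.array([1.0, 0.0]), np.array([0.0, 1.0])]],  # state 0, action 0
    [[np.array([0.0, 1.0])]],                        # state 1, action 0
]
R = np.array([[1.0], [0.0]])
V = robust_value_iteration(P_sets, R)
```

With a finite admissible set, the inner minimization is a simple enumeration, which is one way the max-min computation stays polynomial under the conditions the abstract mentions.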
Optimal Change-Detection and Spiking Neurons
Survival in a non-stationary, potentially adversarial environment requires animals to detect sensory changes rapidly yet accurately, two oft-competing desiderata. Neurons subserving such detections are faced with the corresponding challenge to discern "real" changes in inputs as quickly as possible, while ignoring noisy fluctuations. Mathematically, this is an example of a change-detection problem that is actively researched in the controlled stochastic processes community. In this paper, we utilize sophisticated tools developed in that community to formalize an instantiation of the problem faced by the nervous system, and characterize the Bayes-optimal decision policy under certain assumptions. We will derive from this optimal strategy an information accumulation and decision process that remarkably resembles the dynamics of a leaky integrate-and-fire neuron. This correspondence suggests that neurons are optimized for tracking input changes, and sheds new light on the computational import of intracellular properties such as resting membrane potential, voltage-dependent conductance, and post-spike reset voltage. We also explore the influence that factors such as timing, uncertainty, neuromodulation, and reward should and do have on neuronal dynamics and sensitivity, as the optimal decision strategy depends critically on these factors.
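The accumulate-then-decide dynamic described above can be sketched with the classic Shiryaev recursion, which tracks the posterior probability that a change has already occurred and reports once it crosses a threshold, much like a spike. The Gaussian mean-shift model, hazard rate, and threshold below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gauss_lr(x, mu0=0.0, mu1=1.0, sigma=1.0):
    # likelihood ratio f1(x) / f0(x) for a mean shift in Gaussian noise
    return np.exp((x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2)

def shiryaev_detect(xs, hazard=0.01, threshold=0.95):
    """Bayesian change detection: maintain the posterior probability
    that the change has happened, and threshold it like a spiking
    decision."""
    p = 0.0
    for t, x in enumerate(xs):
        prior = p + (1 - p) * hazard      # change may occur this step
        num = prior * gauss_lr(x)         # weight by sensory evidence
        p = num / (num + (1 - prior))     # renormalized posterior
        if p >= threshold:
            return t                      # "spike": report the change
    return None

# 100 pre-change samples N(0, 1), then 100 post-change samples N(1, 1)
rng = np.random.default_rng(1)
xs = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
t_detect = shiryaev_detect(xs)
```

The hazard term continuously injects a small prior probability of change (a leak toward expectation), while the likelihood ratio accumulates evidence, which is the qualitative correspondence to leaky integrate-and-fire dynamics the abstract draws.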